Generative Models for Preprocessing of Hospital Brain Scans
In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability.
Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
MRI Super-Resolution using Multi-Channel Total Variation
This paper presents a generative model for super-resolution in routine
clinical magnetic resonance images (MRI), of arbitrary orientation and
contrast. The model recasts the recovery of high resolution images as an
inverse problem, in which a forward model simulates the slice-select profile of
the MR scanner. The paper introduces a prior based on multi-channel total
variation for MRI super-resolution. Bias-variance trade-off is handled by
estimating hyper-parameters from the low resolution input scans. The model was
validated on a large database of brain images. The validation showed that the
model can improve brain segmentation, that it can recover anatomical
information between images of different MR contrasts, and that it generalises
well to the large variability present in MR images of different subjects. The
implementation is freely available at https://github.com/brudfors/spm_superre
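The two ingredients described above can be caricatured in a few lines: a crude forward model that averages thin slices into thick ones (a stand-in for a real slice-select profile), and a multi-channel total variation term whose gradient magnitudes are pooled across contrasts before the norm is taken, coupling edges between contrasts. Both functions are illustrative sketches, not the paper's SPM implementation.

```python
import numpy as np

def slice_profile_forward(x, thick=4):
    """Hypothetical forward model: average groups of `thick` thin slices
    to simulate a thick-sliced acquisition (a crude slice-select profile)."""
    nz = (x.shape[0] // thick) * thick
    return x[:nz].reshape(-1, thick, *x.shape[1:]).mean(axis=1)

def multi_channel_tv(channels, eps=1e-8):
    """Multi-channel total variation: squared gradients are pooled across
    channels before the square root, so edges shared between contrasts
    are penalised less than independent edges."""
    grads = []
    for c in channels:
        gy, gx = np.gradient(c)
        grads += [gy, gx]
    return np.sqrt(np.sum(np.stack(grads) ** 2, axis=0) + eps).sum()
```

By the triangle inequality, the pooled norm of two channels is never larger than the sum of their individual TV terms, which is what makes shared anatomy "cheap" under this prior.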
Factorisation-Based Image Labelling
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time consuming and expensive, so having a fully automated and general purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.
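To make "patch-based label propagation" concrete, here is a deliberately naive 2D sketch: each target patch inherits the centre label of its closest atlas patch. This is a didactic stand-in only; FIL itself learns a latent-variable factorisation rather than doing nearest-neighbour matching.

```python
import numpy as np

def propagate_labels(target, atlas_img, atlas_lab, patch=3):
    """Toy nearest-patch label propagation in 2D (a heavily simplified
    stand-in for FIL): each target patch is matched to the closest atlas
    patch by L2 distance and inherits that patch's centre label."""
    r = patch // 2
    out = np.zeros(target.shape, dtype=atlas_lab.dtype)
    H, W = atlas_img.shape
    coords = [(i, j) for i in range(r, H - r) for j in range(r, W - r)]
    patches = np.stack([atlas_img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                        for i, j in coords])
    labels = np.array([atlas_lab[i, j] for i, j in coords])
    for i in range(r, target.shape[0] - r):
        for j in range(r, target.shape[1] - r):
            p = target[i - r:i + r + 1, j - r:j + r + 1].ravel()
            k = np.argmin(((patches - p) ** 2).sum(axis=1))
            out[i, j] = labels[k]
    return out
```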
Joint Total Variation ESTATICS for Robust Multi-Parameter Mapping
Quantitative magnetic resonance imaging (qMRI) derives tissue-specific
parameters -- such as the apparent transverse relaxation rate R2*, the
longitudinal relaxation rate R1 and the magnetisation transfer saturation --
that can be compared across sites and scanners and carry important information
about the underlying microstructure. The multi-parameter mapping (MPM) protocol
takes advantage of multi-echo acquisitions with variable flip angles to extract
these parameters in a clinically acceptable scan time. In this context,
ESTATICS performs a joint loglinear fit of multiple echo series to extract R2*
and multiple extrapolated intercepts, thereby improving robustness to motion
and decreasing the variance of the estimators. In this paper, we extend this
model in two ways: (1) by introducing a joint total variation (JTV) prior on
the intercepts and decay, and (2) by deriving a nonlinear maximum \emph{a
posteriori} estimate. We evaluated the proposed algorithm by predicting
left-out echoes in a rich single-subject dataset. In this validation, we
outperformed other state-of-the-art methods and additionally showed that the
proposed approach greatly reduces the variance of the estimated maps, without
introducing bias.
Comment: 11 pages, 2 figures, 1 table, conference paper, accepted at MICCAI 202
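The joint log-linear idea can be written down for a single voxel in a few lines: taking logs turns each echo series into a straight line in echo time, with one intercept per contrast but a single R2* slope shared by all series. This hypothetical helper (not the authors' code) solves the resulting least-squares problem directly:

```python
import numpy as np

def estatics_fit(echoes, tes):
    """ESTATICS-style joint log-linear fit for one voxel (illustrative
    helper): one log-intercept per contrast, a single shared R2* decay.
    echoes / tes: dicts mapping contrast name -> signals / echo times."""
    names = list(echoes)
    A, y = [], []
    for ci, name in enumerate(names):
        for s, te in zip(echoes[name], tes[name]):
            row = np.zeros(len(names) + 1)
            row[ci] = 1.0   # contrast-specific log-intercept column
            row[-1] = -te   # shared R2* decay column
            A.append(row)
            y.append(np.log(s))
    theta, *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return dict(zip(names, theta[:-1])), theta[-1]
```

Pooling all echo series into one system is what buys the robustness the abstract mentions: every echo constrains the same decay parameter. The paper then goes beyond this plain least-squares fit with a JTV prior and a nonlinear MAP estimate.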
An Algorithm for Learning Shape and Appearance Models without Annotations
This paper presents a framework for automatically learning shape and
appearance models for medical (and certain other) images. It is based on the
idea that having a more accurate shape and appearance model leads to more
accurate image registration, which in turn leads to a more accurate shape and
appearance model. This leads naturally to an iterative scheme, which is based
on a probabilistic generative model that is fit using Gauss-Newton updates
within an EM-like framework. It was developed with the aim of enabling
distributed privacy-preserving analysis of brain image data, such that shared
information (shape and appearance basis functions) may be passed across sites,
whereas latent variables that encode individual images remain secure within
each site. These latent variables are proposed as features for
privacy-preserving data mining applications.
The approach is demonstrated qualitatively on the KDEF dataset of 2D face
images, showing that it can align images that traditionally require shape and
appearance models trained using manually annotated data (manually defined
landmarks etc.). It is applied to MNIST dataset of handwritten digits to show
its potential for machine learning applications, particularly when training
data is limited. The model is able to handle ``missing data'', which allows it
to be cross-validated according to how well it can predict left-out voxels. The
suitability of the derived features for classifying individuals into patient
groups was assessed by applying it to a dataset of over 1,900 segmented
T1-weighted MR images, which included images from the COBRE and ABIDE datasets.
Comment: 61 pages, 16 figures (some downsampled by a factor of 4), submitted to MedI
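The circular idea (a better model gives better registration, which gives a better model) can be illustrated, minus the shape/deformation part, by a toy linear appearance model fitted with alternating least-squares updates. This is a didactic stand-in, not the paper's Gauss-Newton/EM algorithm; in the distributed setting, `W` plays the role of the shared basis and `Z` the per-image latent codes that stay on site.

```python
import numpy as np

def fit_appearance_model(X, k=2, iters=20):
    """Toy linear appearance model: alternate least-squares updates of a
    shared basis W and per-image latent codes Z (an EM-flavoured loop;
    the paper also models shape and uses Gauss-Newton updates).
    X: (n_images, n_pixels) data matrix."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((k, X.shape[1]))  # basis functions (shared)
    mu = X.mean(axis=0)                       # mean image
    Xc = X - mu
    for _ in range(iters):
        Z = Xc @ np.linalg.pinv(W)   # update latent codes given basis
        W = np.linalg.pinv(Z) @ Xc   # update basis given latent codes
    return mu, W, Z
```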
Model-based multi-parameter mapping
Quantitative MR imaging is increasingly favoured for its richer information
content and standardised measures. However, computing quantitative parameter
maps, such as those encoding longitudinal relaxation rate (R1), apparent
transverse relaxation rate (R2*) or magnetisation-transfer saturation (MTsat),
involves inverting a highly non-linear function. Many methods for deriving
parameter maps assume perfect measurements and do not consider how noise is
propagated through the estimation procedure, resulting in needlessly noisy
maps. Instead, we propose a probabilistic generative (forward) model of the
entire dataset, which is formulated and inverted to jointly recover (log)
parameter maps with a well-defined probabilistic interpretation (e.g., maximum
likelihood or maximum a posteriori). The second order optimisation we propose
for model fitting achieves rapid and stable convergence thanks to a novel
approximate Hessian. We demonstrate the utility of our flexible framework in
the context of recovering more accurate maps from data acquired using the
popular multi-parameter mapping protocol. We also show how to incorporate a
joint total variation prior to further decrease the noise in the maps, noting
that the probabilistic formulation allows the uncertainty on the recovered
parameter maps to be estimated. Our implementation uses a PyTorch backend and
benefits from GPU acceleration. It is available at
https://github.com/balbasty/nitorch.
Comment: 20 pages, 6 figures, accepted at Medical Image Analysi
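The noise-propagation point above can be demonstrated with a small Monte Carlo experiment: a naive two-echo R2* estimator inverts the mono-exponential decay directly, and pushing Gaussian measurement noise through that nonlinear map yields visibly scattered estimates. The estimator, echo times, and noise level here are all made up for illustration and are unrelated to the paper's protocol or code.

```python
import numpy as np

def two_echo_r2s(s1, s2, te1, te2):
    """Naive two-echo R2* estimate: direct inversion of the
    mono-exponential decay, with no noise model."""
    return np.log(s1 / s2) / (te2 - te1)

# Monte Carlo: push Gaussian noise through the nonlinear estimator
rng = np.random.default_rng(1)
te1, te2, r2s_true, i0, sigma = 5.0, 30.0, 0.04, 100.0, 2.0
clean = i0 * np.exp(-r2s_true * np.array([te1, te2]))
noisy = clean + rng.normal(0.0, sigma, size=(10000, 2))
est = two_echo_r2s(noisy[:, 0], noisy[:, 1], te1, te2)
```

The spread of `est` around the true value is exactly the kind of needless map noise that a generative model with an explicit noise term, plus a JTV prior, is designed to reduce.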
Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
Vision transformers are effective deep learning models for vision tasks,
including medical image segmentation. However, they lack efficiency and
translational invariance, unlike convolutional neural networks (CNNs). To model
long-range interactions in 3D brain lesion segmentation, we propose an
all-convolutional transformer block variant of the U-Net architecture. We
demonstrate that our model offers the best balance of three factors:
performance competitive with the state-of-the-art; parameter efficiency of a
CNN; and the favourable inductive biases of a transformer. Our public
implementation is available at https://github.com/liamchalcroft/MDUNet
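A one-dimensional caricature of the gating idea behind large-kernel attention (an assumed simplification; the paper's blocks are 3D, learned, and embedded in a U-Net): a large convolution summarises long-range context, and the result multiplicatively gates the input, giving attention-like behaviour with purely convolutional machinery.

```python
import numpy as np

def large_kernel_attention_1d(x, k=13):
    """1-D sketch of large-kernel attention: a wide convolution acts as a
    cheap long-range context aggregator, and its output gates the input
    elementwise. The uniform kernel stands in for a learned one."""
    kernel = np.ones(k) / k                     # stand-in learned kernel
    attn = np.convolve(x, kernel, mode='same')  # long-range aggregation
    return x * attn                             # elementwise gating
```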
ConoSurf: Open-source 3D scanning system based on a conoscopic holography device for acquiring surgical surfaces
Background. A difficulty in computer-assisted interventions is acquiring the patient's anatomy intraoperatively. Standard modalities have several limitations: low image quality (ultrasound), radiation exposure (computed tomography) or high costs (magnetic resonance imaging). An alternative approach uses a tracked pointer; however, the pointer causes tissue deformation and requires sterilizing. Recent proposals, utilizing a tracked conoscopic holography device, have shown promising results without the previously mentioned drawbacks. Methods. We have developed an open-source software system that enables real-time surface scanning using a conoscopic holography device and a wide variety of tracking systems, integrated into pre-existing and well-supported software solutions. Results. The mean target registration error of point measurements was 1.46 mm. For a quick guidance scan, surface reconstruction improved the surface registration error compared with point-set registration. Conclusions. We have presented a system enabling real-time surface scanning using a tracked conoscopic holography device. Results show that it can be useful for acquiring the patient's anatomy during surgery.
Funding information: (Comunidad de Madrid), Grant/Award Number: TOPUS‐CM S2013/MIT‐3024; (Ministerio de Economía y Competitividad, ISCIII), Grant/Award Number: PI15/02121, DTS14/00192, TEC2013–48251‐C2–1‐R, FEDER fund
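The point-set registration baseline mentioned in the Results can be illustrated with the classic Kabsch/Procrustes solution for paired 3D points, the standard closed-form answer to "find the rigid transform aligning these measured points to the model". This is a generic sketch, not the ConoSurf code:

```python
import numpy as np

def rigid_register(P, Q):
    """Rigid point-set registration (Kabsch/Procrustes): find rotation R
    and translation t minimising sum ||R p_i + t - q_i||^2 for paired
    (n x 3) point sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = cq - R @ cp
    return R, t
```

Target registration error, as reported in the abstract, is then just the residual distance at held-out landmarks after applying the recovered transform.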
Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
Image registration is a fundamental medical image analysis task, and a wide
variety of approaches have been proposed. However, only a few studies have
comprehensively compared medical image registration approaches on a wide range
of clinically relevant tasks. This limits the development of registration
methods, the adoption of research advances into practice, and a fair benchmark
across competing approaches. The Learn2Reg challenge addresses these
limitations by providing a multi-task medical image registration data set for
comprehensive characterisation of deformable registration algorithms. A
continuous evaluation will be possible at
https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of
anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR),
availability of annotations, as well as intra- and inter-patient registration
evaluation. We established an easily accessible framework for training and
validation of 3D registration methods, which enabled the compilation of results
of over 65 individual method submissions from more than 20 unique teams. We
used a complementary set of metrics, including robustness, accuracy,
plausibility, and runtime, enabling unique insight into the current
state-of-the-art of medical image registration. This paper describes datasets,
tasks, evaluation methods and results of the challenge, as well as results of
further analysis of transferability to new datasets, the importance of label
supervision, and resulting bias. While no single approach worked best across
all tasks, many methodological aspects could be identified that push the
performance of medical image registration to new state-of-the-art performance.
Furthermore, we demystified the common belief that conventional registration
methods have to be much slower than deep-learning-based methods.
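To make the evaluation concrete, here is a minimal sketch of two of the metric families mentioned above: label-overlap accuracy via the Dice score, and a robustness summary taken here, as an assumption, to be the mean of the lowest 30% of per-case Dice values (the challenge's exact definition may differ).

```python
import numpy as np

def dice(a, b, label):
    """Dice overlap of one label between two segmentations."""
    a, b = (a == label), (b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def robustness(dices, q=30):
    """Assumed robustness summary: mean of the lowest q% of per-case
    Dice scores, rewarding methods with few catastrophic failures."""
    d = np.sort(np.asarray(dices, dtype=float))
    k = max(1, int(len(d) * q / 100))
    return d[:k].mean()
```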